The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
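As a concrete illustration of the patch-based training strategy that 69% of respondents used to handle oversized samples, here is a minimal sketch assuming a NumPy image array; the function name, patch size, and patch count are illustrative, not taken from any surveyed solution.

```python
import numpy as np

def sample_patches(image, patch_size=128, n_patches=16, rng=None):
    """Randomly crop fixed-size patches so oversized images fit in memory."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    patches = []
    for _ in range(n_patches):
        top = int(rng.integers(0, h - patch_size + 1))
        left = int(rng.integers(0, w - patch_size + 1))
        patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)  # (n_patches, patch_size, patch_size, ...)
```

Training then iterates over such patch batches instead of whole images; at inference, per-patch predictions are stitched (or averaged over overlapping regions) back to full resolution.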
Recently, domain-specific pre-trained language models (PLMs) have been proposed to boost task performance in specific domains (e.g., biomedical and computer science) by continuing to pre-train general PLMs on domain-specific corpora. However, this Domain-Adaptive Pre-Training (DAPT; Gururangan et al. (2020)) tends to forget the general knowledge previously acquired by general PLMs, which leads to catastrophic forgetting and sub-optimal performance. To alleviate this problem, we propose a new framework, the General Memory Augmented Pre-trained Language Model (G-MAP), which augments the domain-specific PLM with a memory representation built from the frozen general PLM, without losing any general knowledge. Specifically, we propose a new memory-augmented layer and, based on it, explore different augmentation strategies to build the memory representation and adaptively fuse it into the domain-specific PLM. We demonstrate the effectiveness of G-MAP on various domains (biomedical and computer science publications, news, and reviews) and different kinds of tasks (text classification, QA, NER), and extensive results show that the proposed G-MAP achieves SOTA results on all tasks.
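The following is a minimal PyTorch sketch of what a memory-augmented layer in the spirit of G-MAP could look like, assuming the frozen general PLM's hidden states are precomputed and passed in as `memory`; the cross-attention plus gated fusion shown here is one illustrative reading of "adaptively fuse", not the paper's exact design.

```python
import torch
import torch.nn as nn

class MemoryAugmentedLayer(nn.Module):
    """Fuses frozen general-PLM states (memory) into domain-PLM states."""
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, hidden, memory):
        # hidden: (B, T, D) domain-PLM states; memory: (B, T, D) frozen general-PLM states.
        attended, _ = self.cross_attn(hidden, memory, memory)
        g = torch.sigmoid(self.gate(torch.cat([hidden, attended], dim=-1)))
        return g * attended + (1.0 - g) * hidden  # adaptive gated fusion
```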
Deep learning (DL)-based tomographic SAR imaging algorithms are receiving growing attention. Typically, they use an unfolding network to mimic the iterative calculation of classical compressive sensing (CS)-based methods and process each range-azimuth unit individually. In this way, however, only one-dimensional features are effectively utilized, and the correlation between adjacent resolution units is simply ignored. To address this, we propose a new model-data-driven network that achieves tomoSAR imaging based on multi-dimensional features. Guided by the deep unfolding methodology, a two-dimensional deep unfolding imaging network is constructed. On top of it, we add two 2D processing modules, both convolutional encoder-decoder structures, to effectively enhance multi-dimensional features of the imaging scene. Meanwhile, to train the proposed multi-feature-based imaging network, we construct a tomoSAR simulation dataset consisting entirely of simulated building data. Experiments verify the effectiveness of the model: compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our proposed method achieves better structural completeness while maintaining decent imaging accuracy.
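To make the deep-unfolding idea concrete, here is a minimal PyTorch sketch of one unfolded ISTA-style iteration augmented with a small 2D convolutional encoder-decoder module, in the spirit of the description above; the operators `A`/`At` (forward and adjoint measurement) and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class UnfoldedLayer(nn.Module):
    """One learnable ISTA-style iteration plus 2D feature enhancement."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))    # learned gradient step
        self.theta = nn.Parameter(torch.tensor(0.01))  # learned shrinkage threshold
        self.enhance = nn.Sequential(                  # small conv encoder-decoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x, y, A, At):
        # x: (B, 1, H, W) current scene estimate; y: measurements; A/At: operators.
        x = soft_threshold(x - self.step * At(A(x) - y), self.theta)
        return x + self.enhance(x)  # exploit correlation between adjacent units
```

Stacking several such layers and training them end-to-end yields the unfolded imaging network, with the 2D module supplying the cross-unit features that per-unit CS iterations discard.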
Benefiting from a relatively large aperture angle combined with a wide transmit bandwidth, near-field synthetic aperture radar (SAR) provides a high-resolution image of a target's scattering distribution (hot spots). Meanwhile, the imaging result inevitably suffers degradation from sidelobes, clutter, and noise, hindering information retrieval about the target. To restore the image, current methods make simplified assumptions; for example, that the point spread function (PSF) is spatially consistent, that the target consists of sparse point scatterers, etc. They thus achieve limited restoration performance in terms of the target's shape, especially for complex targets. To address these issues, this work conducts a preliminary study on restoration with recent, promising deep learning inverse techniques. We reformulate the degradation model as a spatially variable complex-convolution model, in which the near-field SAR system response is taken into account. Based on this model, a model-based deep learning network is designed to restore the image. A dataset of simulated degraded images from multiple complex target models is constructed to validate the network; all images are generated using an electromagnetic simulation tool. Experiments on the dataset demonstrate the network's effectiveness: compared with current methods, superior performance is achieved in estimating the target's shape and energy.
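Below is a minimal sketch of a spatially variable complex-convolution degradation model of the kind described, approximated as a mask-weighted sum of regionwise FFT convolutions with local PSFs; the partition masks and PSFs stand in for the actual near-field SAR system response and are purely illustrative.

```python
import torch

def degrade(target, psfs, masks):
    """target: (H, W) complex scene; psfs: (K, H, W) complex local PSFs;
    masks: (K, H, W) real partition-of-unity weights over the scene."""
    out = torch.zeros_like(target)
    spectrum = torch.fft.fft2(target)
    for psf, mask in zip(psfs, masks):
        # Circular convolution with one local PSF, weighted by its region mask.
        out = out + mask.to(out.dtype) * torch.fft.ifft2(spectrum * torch.fft.fft2(psf))
    return out
```

A model-based restoration network can then alternate between inverting this forward model and applying a learned prior, rather than assuming a single spatially consistent PSF.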
This work focuses on 3D radar imaging inverse problems. Current methods obtain undifferentiated results that suffer task-dependent information-retrieval loss and thus do not meet each task's specific demands well. For example, biased scattering energy may be acceptable for screening imaging but not for scattering diagnosis. To address this issue, we propose a new task-oriented imaging framework. The imaging procedure is made task-oriented through an analysis phase that identifies the task's demands. The imaging model is regularized with multiple cognitions to embed and fulfill these demands. The imaging method is designed to be generalized: couplings between cognitions are decoupled and solved individually with approximation and variable-splitting techniques. Tasks including scattering diagnosis, person screening imaging, and parcel screening imaging are given as examples. Experiments on data from two systems indicate that the proposed framework outperforms current ones in task-dependent information retrieval.
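To illustrate the variable-splitting idea, here is a minimal NumPy sketch in which each "cognition" regularizer receives its own auxiliary variable and proximal update, so the coupled problem decouples into simple per-term steps; the two priors shown (sparsity and smoothness) and all step sizes are illustrative stand-ins for the paper's cognitions.

```python
import numpy as np

def prox_sparse(v, lam):
    """Soft threshold: proximal step for a sparse-scattering prior."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_smooth(v, lam):
    """Local averaging: a crude proximal step for a smoothness prior."""
    neighbors = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1))
    return (v + lam * 0.25 * neighbors) / (1.0 + lam)

def split_step(x, y, A, At, rho=1.0, lam=0.1, lr=0.05):
    """One splitting iteration: each prior is solved individually, then the
    data term pulls x toward both auxiliary variables."""
    z1, z2 = prox_sparse(x, lam / rho), prox_smooth(x, lam / rho)
    grad = At(A(x) - y) + rho * ((x - z1) + (x - z2))
    return x - lr * grad
```

A task-oriented variant would simply swap in, weight, or drop proximal terms according to the demands identified in the analysis phase.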
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that replaces the standard mobile ISP and can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and can process Full HD photos in 20-50 milliseconds while achieving high-fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
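As an illustration of the kind of pipeline the challenge targets, here is a minimal TensorFlow sketch of a learned ISP that maps packed 4-channel Bayer RAW input to full-resolution RGB and is exported to TensorFlow Lite; the layer sizes are illustrative, not a winning solution.

```python
import tensorflow as tf

# Packed RGGB Bayer RAW at half resolution in; Full HD RGB out.
inp = tf.keras.Input(shape=(540, 960, 4))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.Conv2D(12, 3, padding="same")(x)  # 12 = 3 RGB x (2x2 upsample)
out = tf.nn.depth_to_space(x, 2)                      # (1080, 1920, 3)
model = tf.keras.Model(inp, out)

# Export for on-device inference with TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("learned_isp.tflite", "wb") as f:
    f.write(converter.convert())
```

The pixel-shuffle (`depth_to_space`) head keeps all convolutions at half resolution, which is one common way such models stay within a mobile-GPU latency budget.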
Digital human recommendation systems have been developed to help customers find their favorite products, and they are playing an active role in various recommendation contexts. Timely catching and learning the dynamics of customer preferences, while meeting their exact requirements, has become crucial in the digital human recommendation domain. We design a novel, practical digital human interactive recommendation agent framework based on Reinforcement Learning (RL) to improve the efficiency of interactive recommendation decision-making by leveraging both digital human features and the superior flexibility of RL. Our proposed framework learns dynamically from real-time interactions between the digital human and customers via state-of-the-art RL algorithms, combined with multimodal embedding and graph embedding, to improve the accuracy of personalization and thus enable the digital human agent to catch the customer's attention in a timely manner. Experiments on real business data demonstrate that our framework can provide better personalized customer engagement and better customer experiences.
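Here is a minimal NumPy sketch of the interactive-recommendation loop with a generic policy-gradient (REINFORCE-style) agent; the state vector stands in for the multimodal and graph embeddings mentioned above, and all names and dimensions are illustrative.

```python
import numpy as np

class InteractiveAgent:
    """Linear softmax policy over items, updated with REINFORCE."""
    def __init__(self, state_dim, n_items, lr=0.01):
        self.w = np.zeros((state_dim, n_items))
        self.lr = lr

    def _policy(self, state):
        logits = state @ self.w
        p = np.exp(logits - logits.max())
        return p / p.sum()

    def act(self, state):
        return int(np.random.choice(self.w.shape[1], p=self._policy(state)))

    def update(self, state, action, reward):
        # Policy gradient: grad log pi(a|s) = outer(s, onehot(a) - pi(s)).
        p = self._policy(state)
        grad = -np.outer(state, p)
        grad[:, action] += state
        self.w += self.lr * reward * grad

# Usage: state = concat(multimodal_embedding, graph_embedding); the agent
# recommends an item, observes customer feedback as reward, then updates.
```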
With the flourishing of convolutional neural networks (CNNs), CNNs such as VGG-16 and ResNet-50 have been widely used as backbones in SAR ship detection. However, CNN-based backbones struggle to model long-range dependencies, leaving shallow feature maps short of high-quality semantic information, which leads to poor detection performance in complex backgrounds and for small ships. To address these problems, we propose a SAR ship detection method based on the Swin Transformer together with a feature-enhancement feature pyramid network (FEFPN). The Swin Transformer serves as the backbone to model long-range dependencies and generate hierarchical feature maps. The FEFPN is proposed to further improve the quality of the feature maps by gradually enhancing the semantic information of feature maps at all levels, especially those in shallow layers. Experiments on the SAR ship detection dataset (SSDD) demonstrate the advantages of our proposed method.
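The following is a minimal PyTorch sketch of a feature-enhancement FPN in the spirit of FEFPN: standard top-down fusion plus an extra per-level enhancement block to strengthen semantics in shallow maps. The default input channels match Swin-T stage widths, but all sizes are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEnhanceFPN(nn.Module):
    def __init__(self, in_channels=(96, 192, 384, 768), out_ch=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in in_channels)
        self.enhance = nn.ModuleList(
            nn.Sequential(nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())
            for _ in in_channels)

    def forward(self, feats):
        # feats: hierarchical maps from the Swin backbone, shallow -> deep.
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 2, -1, -1):  # top-down semantic flow
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [e(x) for e, x in zip(self.enhance, laterals)]
```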
Video matting aims to predict the alpha matte of each frame from a given input video sequence. Over the past few years, state-of-the-art solutions have been dominated by deep convolutional neural networks (CNNs), which have become the de facto standard in both academia and industry. However, they have a built-in local inductive bias and, due to their CNN-based architecture, do not capture the global features of an image. Given the computational cost of processing feature maps of multiple frames, they also lack long-range temporal modeling. In this paper, we propose VMFormer: a transformer-based end-to-end method for video matting. It predicts the alpha matte of each frame from learnable queries, given a video input sequence. Specifically, it leverages self-attention layers to build global integration of feature sequences, with short-range temporal modeling on successive frames. We further apply queries to learn global representations through cross-attention in the transformer decoder, with long-range temporal modeling applied over all queries. In the prediction stage, both the queries and the corresponding feature maps are used to make the final prediction of the alpha matte. Experiments show that VMFormer outperforms previous CNN-based video matting methods on composited benchmarks. To our best knowledge, it is the first end-to-end video matting solution built upon a full vision transformer with predictions on learnable queries. The project is open-sourced at https://chrisjuniorli.github.io/project/project/vmformer/
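Below is a minimal PyTorch sketch of query-based matte prediction in the spirit of VMFormer: learnable queries cross-attend to per-frame features, and each query produces an alpha matte by correlation with the feature map. Dimensions are illustrative, and the short- and long-range temporal modeling are omitted for brevity.

```python
import torch
import torch.nn as nn

class QueryMatteHead(nn.Module):
    def __init__(self, d=256, n_queries=8, n_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d))
        self.cross_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)

    def forward(self, feat):
        # feat: (B, d, H, W) per-frame feature map from the encoder.
        B, d, H, W = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)         # (B, HW, d)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)  # (B, Q, d)
        q, _ = self.cross_attn(q, tokens, tokens)        # global integration
        alpha = torch.einsum("bqd,bdhw->bqhw", q, feat)  # query-feature correlation
        return torch.sigmoid(alpha)                      # (B, Q, H, W) mattes
```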
Cloth-changing person re-identification (CC-ReID), which aims to match person identities under clothing changes, has emerged as a new research topic in recent years. However, typical biometrics-based CC-ReID methods usually require cumbersome pose or body-part estimators to learn cloth-irrelevant features from human biometric traits, which comes with high computational cost. Moreover, performance is significantly limited by the resolution degradation of surveillance images. To address the above limitations, we propose an effective identity-sensitive knowledge propagation framework (DeSKPro) for CC-ReID. Specifically, a cloth-irrelevant spatial attention module is introduced to eliminate attention to clothing appearance by acquiring knowledge from a human parsing module. To mitigate the resolution degradation of human faces and to mine identity-sensitive cues, we propose restoring the missing facial details using prior facial knowledge, which is then propagated to a smaller network. After training, the extra computation for human parsing or face restoration is no longer needed. Extensive experiments show that our framework outperforms state-of-the-art methods. Our code is available at https://github.com/kimbingng/deskpro.
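Here is a minimal PyTorch sketch of a cloth-irrelevant spatial attention step as described: a human-parsing label map is used to down-weight clothing pixels before feature extraction. The parser label IDs and the soft-mask value are hypothetical assumptions, not DeSKPro's actual configuration.

```python
import torch

CLOTH_LABELS = {5, 6, 7, 9, 12}  # hypothetical parser IDs for clothing classes

def cloth_irrelevant_mask(parsing, soft=0.1):
    """parsing: (B, H, W) integer label map from a human-parsing model."""
    mask = torch.ones_like(parsing, dtype=torch.float32)
    for lbl in CLOTH_LABELS:
        mask[parsing == lbl] = soft  # suppress, rather than zero out, clothes
    return mask.unsqueeze(1)         # (B, 1, H, W), broadcastable over features

# Usage: features = backbone(image * cloth_irrelevant_mask(parsing))
```

Because the mask is only needed to supervise attention during training, the parsing network can be discarded at inference time, consistent with the framework's goal of avoiding extra computation after training.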